
    Hardware for Mobile Positioning: Considerations on Design and Development of Cost-Efficient Solutions

    The estimation of a moving agent's position in an unknown environment is a problem that was formulated in its current form as early as the 1980s. Interest in localization and mapping problems has grown rapidly over the last two decades, driven by the increased computational capability of handheld systems in particular and by a large number of target applications in various fields, ranging from self-driving cars and geomatics to robotics and virtual/augmented reality. Besides the positioning algorithms themselves, hardware plays a major role as the backbone that enables accurate, robust and flexible position estimation. This thesis gives an overview of sensors utilized in mobile positioning, with a focus on passive visual-inertial sensors as an alternative to more expensive active-ranging solutions. The main research interest of the thesis is the feasibility of developing and implementing a cost-efficient hardware solution for positioning. The advantages, performance parameters, sources of error and physical requirements of visual, inertial and satellite positioning sensors are considered. Sensor integration and both sensor- and system-level calibration in a multisensor setup are discussed. Levels of developer involvement and options for hardware development are presented: ready-made modular solutions, building on top of intermediate products, and development from scratch. The hardware development process is demonstrated by implementing a synchronized visual-inertial positioning system comprising two pairs of stereo cameras, an inertial measurement unit and a Real-Time Kinematic (RTK) capable satellite positioning solution. The system serves as a cost-efficient example of the options and decisions required in selecting sensors and the computational subsystems that support them, in integrating the sensors and keeping them continuously synchronized in time, and in specifying and manufacturing system enclosures. Even though the direct costs of the solution appear low compared to competing products, accounting for development time and the associated risk makes hardware development from scratch a less attractive option than the other approaches. For a proof of concept, or when only a very limited number of end products is produced, implementation from the ground up is likely to be time-consuming and thus to end up an expensive endeavor compared to the alternatives. Moreover, the benefits of controlling the details of the hardware and its integration may not be fully utilized.
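
    The continuous temporal synchronization mentioned above amounts to relating measurements from sensors that sample at different rates on a shared clock; per-frame inertial data is then commonly obtained by interpolating the IMU stream at each camera frame's timestamp. The Swift sketch below illustrates that idea only; the ImuSample type and imuAt function are hypothetical names for illustration, not the thesis's implementation.

        import Foundation

        /// One inertial reading stamped on the same host clock as the camera frames.
        /// Hypothetical type for illustration; not from the thesis.
        struct ImuSample {
            let timestamp: TimeInterval           // seconds on the shared clock
            let gyro: (x: Double, y: Double, z: Double)
        }

        /// Linearly interpolates the IMU stream at a camera frame's timestamp.
        /// Assumes `samples` is sorted by timestamp and brackets `frameTime`.
        func imuAt(_ frameTime: TimeInterval, in samples: [ImuSample]) -> ImuSample? {
            guard let upper = samples.firstIndex(where: { $0.timestamp >= frameTime }),
                  upper > 0 else { return nil }
            let a = samples[upper - 1]
            let b = samples[upper]
            let t = (frameTime - a.timestamp) / (b.timestamp - a.timestamp)
            return ImuSample(timestamp: frameTime,
                             gyro: (a.gyro.x + t * (b.gyro.x - a.gyro.x),
                                    a.gyro.y + t * (b.gyro.y - a.gyro.y),
                                    a.gyro.z + t * (b.gyro.z - a.gyro.z)))
        }

    Interpolation of this kind only bridges the differing sample rates; keeping the sensor clocks from drifting apart in the first place is what hardware triggering or a shared timestamping subsystem provides.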

    Simultaneous Localization and Mapping with Apple ARKit

    Simultaneous Localization and Mapping (SLAM) methods aim to map the environment with a moving sensor while keeping track of the sensor's location within the map. Many types of SLAM systems based on different sensors have been proposed, one being visual SLAM, which utilizes visual information obtained by a camera. The objectives of this thesis are to study the main stages and limitations of visual SLAM methods and to examine the functionality and applications of the Apple ARKit API with respect to SLAM. The thesis is divided into a study of visual SLAM methods, a study of ARKit's functionality, and experiments with ARKit. The first part of the thesis examines the essential components of a feature-based monocular SLAM system. The method estimates the camera's relative rotation and translation from images taken of the environment. The main stages are extracting and matching features, estimating the camera's extrinsic parameters from those features, triangulating to obtain depth in the environment, and loop closure. Loop closure occurs when the camera returns to a previously visited location, which triggers a global optimization of the created map. The performance of visual SLAM methods degrades in dynamic or repetitive environments, in environments with few features, and when external disturbances affect the sensor. Localization accuracy can be increased by adding supporting sensors to the SLAM implementation, e.g. an inertial measurement unit (IMU). Fusion of the pose estimated from visual data with IMU measurements can be performed, for example, with an extended Kalman filter. In the second part of the thesis, the Apple ARKit API and the functionality of its ARWorldTracking configuration are presented. Experiments with apps utilizing ARKit are presented in the third part of the thesis. The first app focuses on ARKit's visual-inertial odometry and position estimation. In the experiments, ARKit had difficulties with local loop closure in a repetitive environment, and its vertical position estimate began to drift in large environments under external disturbance. The second part of the experiments examines the possibilities of scene reconstruction with ARKit. Two approaches to reconstruction are presented: obtaining the images and camera parameters with ARKit, and visualizing the features and their positions in an augmented reality scene. The images and camera parameters could be utilized in scene reconstruction with Structure from Motion, and a sparse model of an object can be obtained from the features and their positions recognized by ARKit.
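
    As a rough illustration of the ARKit usage described above (reading per-frame camera parameters and the sparse feature points that a reconstruction could build on), the following Swift sketch runs a world-tracking session and logs the camera pose, intrinsics and feature-point count for each frame. It is a minimal sketch against the public ARKit API; the TrackingLogger class is a hypothetical name, and rendering and error handling are omitted.

        import ARKit

        /// Minimal session driver that logs, per tracked frame, the quantities
        /// discussed above: camera pose, intrinsics and sparse feature points.
        /// Hypothetical example class; not from the thesis.
        final class TrackingLogger: NSObject, ARSessionDelegate {
            let session = ARSession()

            func start() {
                session.delegate = self
                session.run(ARWorldTrackingConfiguration())  // 6-DoF visual-inertial tracking
            }

            func session(_ session: ARSession, didUpdate frame: ARFrame) {
                // Camera-to-world transform estimated by visual-inertial odometry.
                let pose = frame.camera.transform
                // Pinhole intrinsics; together with the captured image, these are
                // the inputs a Structure-from-Motion pipeline would consume.
                let K = frame.camera.intrinsics
                // Sparse world-space feature points currently tracked by ARKit.
                let featureCount = frame.rawFeaturePoints?.points.count ?? 0
                print("t=\(frame.timestamp) position=\(pose.columns.3) fx=\(K[0][0]) features=\(featureCount)")
            }
        }

    Accumulating the rawFeaturePoints across a session is one way to obtain the kind of sparse point model of an object that the abstract describes.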